AI image generator


Google's and OpenAI's Chatbots Can Strip Women in Photos Down to Bikinis

WIRED

Users of AI image generators are offering each other instructions on how to use the tech to alter pictures of women into realistic, revealing deepfakes. Some users of popular chatbots are generating bikini deepfakes from photos of fully clothed women, and most of these fake images appear to be created without the consent of the women pictured. Some of these same users are also advising others on how to use generative AI tools to strip the clothes off women in photos and make them appear to be wearing bikinis. Under a now-deleted Reddit post titled "gemini nsfw image generation is so easy," users traded tips on getting Gemini, Google's generative AI model, to make pictures of women in revealing clothes.


Nvidia CEO Jensen Huang Is Bananas for Google Gemini's AI Image Generator

WIRED

The Nvidia CEO reveals his consuming love for Google's image generator, the artsy side of Grok, and what exactly he uses Perplexity, Gemini, and ChatGPT for right now. Nvidia CEO Jensen Huang is in London, standing in front of a room full of journalists, outing himself as a huge fan of Gemini's Nano Banana. "How could anyone not love Nano Banana? I mean Nano Banana, how good is that? Tell me it's not true! I was just talking to Demis [Hassabis, CEO of DeepMind] yesterday and I said 'How about that Nano Banana!'" It looks like lots of people agree with him: the popularity of the Nano Banana AI image generator, which launched in August and lets users make precise edits to AI images while preserving the quality of faces, animals, or other objects in the background, drove a surge of 300 million images for Gemini in the first few days of September, according to a post on X by Josh Woodward, VP of Google Labs and Google Gemini. Huang, whose company was among a cohort of big US technology companies to announce investments in data centers, supercomputers, and AI research in the UK on Tuesday, is on a high. Speaking ahead of a white-tie event with UK prime minister Keir Starmer (where he plans to wear custom black leather tails), he's boisterously optimistic about the future of AI in the UK, saying the country is "too humble" about its potential for AI advancements. He cites the UK's pedigree in areas as varied as the industrial revolution, steam trains, DeepMind (now owned by Google), and university research, as well as other tangential skills. "No one fries food better than you do," he quips. Nvidia announced a $683 million equity investment in datacenter builder Nscale this week, a move that, alongside investments from OpenAI and Microsoft, has propelled the company to the epicenter of this AI push in the UK.
Huang estimates that Nscale will generate more than $68 billion in revenues over six years. "I'll go on record to say I'm the best thing that's ever happened to him," he says, referring to Nscale CEO Josh Payne. "As AI services get deployed--I'm sure that all of you use it.


Why This Artist Isn't Afraid of AI's Role in the Future of Art

TIME - Tech

As AI enters the workforce and seeps into all facets of our lives at unprecedented speed, we're told by leaders across industries that if you're not using it, you're falling behind. Yet when AI's use in art enters the conversation, some retreat in discomfort, shunning it as an affront to the very essence of art. This ongoing debate continues to create rifts among artists. AI is fundamentally changing the creative process, and its purpose, significance, and influence depend on one's own values, making its trajectory hard to predict and even harder to confront. Miami-based Panamanian photographer Dahlia Dreszer stands out as an optimist and believer in AI's powers.


An AI Image Generator's Exposed Database Reveals What People Really Used It For

WIRED

Tens of thousands of explicit AI-generated images, including AI-generated child sexual abuse material, were left open and accessible to anyone on the internet, according to new research seen by WIRED. An open database belonging to an AI image-generation firm contained more than 95,000 records, including some prompt data and images of celebrities such as Ariana Grande, the Kardashians, and Beyoncé de-aged to look like children. The exposed database, which was discovered by security researcher Jeremiah Fowler, who shared details of the leak with WIRED, is linked to South Korea–based website GenNomis. The website and its parent company, AI-Nomis, hosted a number of image generation and chatbot tools for people to use. More than 45 GB of data, mostly made up of AI images, was left in the open.


How AI images are 'flattening' Indigenous cultures – creating a new form of tech colonialism

AIHub

It feels like everything is slowly but surely being affected by the rise of artificial intelligence (AI). And like every other disruptive technology before it, AI is having both positive and negative outcomes for society. One of these negative outcomes is the very specific, yet very real cultural harm posed to Australia's Indigenous populations. The National Indigenous Times reports Adobe has come under fire for hosting AI-generated stock images that claim to depict "Indigenous Australians", but don't resemble Aboriginal and Torres Strait Islander peoples. Some of the figures in these generated images also have random body markings that are culturally meaningless.


Google DeepMind's latest medical breakthrough borrows a trick from AI image generators

Engadget

Much of the recent AI hype train has centered on mesmerizing digital content generated from simple prompts, alongside concerns about its ability to decimate the workforce and make malicious propaganda much more convincing. However, some of AI's most promising, and potentially much less ominous, work lies in medicine. A new update to Google's AlphaFold software could lead to new disease research and treatment breakthroughs. AlphaFold, developed by Google DeepMind and the also-Alphabet-owned Isomorphic Labs, has already demonstrated that it can predict how proteins fold with shocking accuracy. It has cataloged a staggering 200 million known proteins, and Google says millions of researchers have used previous versions to make discoveries in areas like malaria vaccines, cancer treatment, and enzyme design.


Can AI image generators be policed to prevent explicit deepfakes of children?

The Guardian

Child abusers are creating AI-generated "deepfakes" of their targets in order to blackmail them into filming their own abuse, beginning a cycle of sextortion that can last for years. Creating simulated child abuse imagery is illegal in the UK, and Labour and the Conservatives have aligned on the desire to ban all explicit AI-generated images of real people. But there is little global agreement on how the technology should be policed. Worse, no matter how strongly governments take action, the creation of more images will always be a press of a button away: explicit imagery is built into the foundations of AI image generation. In December, researchers at Stanford University made a disturbing discovery: buried among the billions of images making up one of the largest training sets for AI image generators were hundreds, perhaps thousands, of instances of child sexual abuse material (CSAM).


Meta's AI is accused of being RACIST: Shocked users say Mark Zuckerberg's chatbot refuses to imagine an Asian man with a white woman

Daily Mail - Science & tech

Just weeks after Google was forced to shut down its 'woke' AI, another tech giant faces criticism over its bot's racial bias. Meta's AI image generator has been accused of being 'racist' after users discovered it was unable to imagine an Asian man with a white woman. The AI tool, created by Facebook's parent company, is able to take almost any written prompt and convert it into a shockingly realistic image within seconds. However, users found the AI was unable to create images showing mixed-race couples, despite the fact that Meta CEO Mark Zuckerberg is himself married to an Asian woman. On social media, commenters have criticised this as an example of the AI's racial bias, with one describing the AI as 'racist software made by racist engineers'.


Microsoft ignored safety problems with AI image generator, engineer complains

The Guardian

An artificial intelligence engineer at Microsoft published a letter Wednesday alleging that the company's AI image generator lacks basic safeguards against creating violent and sexualized images. In the letter, engineer Shane Jones states that his repeated attempts to warn Microsoft management about the problems failed to result in any action. Jones said he sent the message to the Federal Trade Commission and Microsoft's board of directors. "Internally the company is well aware of systemic issues where the product is creating harmful images that could be offensive and inappropriate for consumers," Jones states in the letter, which he published on LinkedIn. He lists his title as "principal software engineering manager".


AI Art is Theft: Labour, Extraction, and Exploitation, Or, On the Dangers of Stochastic Pollocks

Goetze, Trystan S.

arXiv.org Artificial Intelligence

Since the launch of applications such as DALL-E, Midjourney, and Stable Diffusion, generative artificial intelligence has been controversial as a tool for creating artwork. While some have presented longtermist worries about these technologies as harbingers of fully automated futures to come, more pressing is the impact of generative AI on creative labour in the present. Already, business leaders have begun replacing human artistic labour with AI-generated images. In response, the artistic community has launched a protest movement, which argues that AI image generation is a kind of theft. This paper analyzes, substantiates, and critiques these arguments, concluding that AI image generators involve an unethical kind of labour theft. If correct, many other AI applications also rely upon theft.